
    A Revised Design for Microarray Experiments to Account for Experimental Noise and Uncertainty of Probe Response

    Background: Although microarrays are widely used analysis tools in biomedical research, they are known to yield noisy output that usually requires experimental confirmation. To tackle this problem, many studies have developed rules for optimizing probe design and devised complex statistical tools to analyze the output. However, less emphasis has been placed on systematically identifying the noise component as part of the experimental procedure. One source of noise is the variance in probe binding, which can be assessed by replicating array probes. The second source is poor probe performance, which can be assessed by calibrating the array based on a dilution series of target molecules. Using model experiments for copy number variation and gene expression measurements, we investigate here a revised design for microarray experiments that addresses both of these sources of variance.
    Results: Two custom arrays were used to evaluate the revised design: one based on 25-mer probes from an Affymetrix design and the other based on 60-mer probes from an Agilent design. To assess experimental variance in probe binding, all probes were replicated ten times. To assess probe performance, the probes were calibrated using a dilution series of target molecules and the signal response was fitted to an adsorption model. We found that significant variance of the signal could be controlled by averaging across probes and removing probes that are nonresponsive or poorly responsive in the calibration experiment. Taking this into account, one can obtain a more reliable signal with the added option of obtaining absolute rather than relative measurements.
    Conclusion: Assessing the technical variance within the experiments, combined with calibrating the probes, makes it possible to remove poorly responding probes and yields more reliable signals for the remaining ones. Once an array is properly calibrated, absolute quantification of signals becomes straightforward, alleviating the need for normalization and reference hybridizations.
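
    The calibration step lends itself to a small computational sketch. The snippet below fits a Langmuir-type response to a dilution series and flags nonresponsive probes; the hyperbolic functional form, the responsiveness threshold and all numbers are illustrative assumptions, not the authors' pipeline.

        # Minimal sketch of probe calibration against a dilution series.
        # Assumption: a simple Langmuir isotherm I(c) = bg + Imax*c/(K + c)
        # and an arbitrary responsiveness threshold (not from the paper).
        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(c, bg, imax, k):
            """Hyperbolic adsorption response: background plus saturating term."""
            return bg + imax * c / (k + c)

        def calibrate_probe(conc, intensity, min_gain=2.0):
            """Fit the dilution series; return fit params and a keep/discard flag."""
            p0 = [intensity.min(), intensity.max() - intensity.min(), np.median(conc)]
            try:
                params, _ = curve_fit(langmuir, conc, intensity, p0=p0, maxfev=10_000)
            except RuntimeError:
                return None, False                        # fit failed: nonresponsive
            bg, imax, k = params
            responsive = imax > min_gain * max(bg, 1e-9)  # signal must exceed background
            return params, responsive

        # Ten replicate probes; average only the responsive ones.
        rng = np.random.default_rng(1)
        conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])   # dilution series (a.u.)
        replicates = [100 * conc / (conc + 3) + rng.normal(0, 2, 6) for _ in range(10)]
        kept = [r for r in replicates if calibrate_probe(conc, r)[1]]
        mean_signal = np.mean(kept, axis=0)                 # averaged, calibrated signal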

    "Hook"-calibration of GeneChip-microarrays: Theory and algorithm

    Background: The improvement of microarray calibration methods is an essential prerequisite for quantitative expression analysis. This issue requires the formulation of an appropriate model describing the basic relationship between the probe intensity and the specific transcript concentration in a complex environment of competing interactions, the estimation of the magnitude of these effects and their correction using the intensity information of a given chip, and, finally, the development of practicable algorithms which judge the quality of a particular hybridization and estimate the expression degree from the intensity values.
    Results: We present the so-called hook-calibration method, which co-processes the log-difference (delta) and log-sum (sigma) of the perfect match (PM) and mismatch (MM) probe intensities. The MM probes are utilized as an internal reference which is subject to the same hybridization law as the PM, albeit with modified characteristics. After sequence-specific affinity correction, the method fits the Langmuir adsorption model to the smoothed delta-versus-sigma plot. The geometrical dimensions of this so-called hook curve characterize the particular hybridization in terms of simple geometric parameters which provide information about the mean non-specific background intensity, the saturation value, the mean PM/MM sensitivity gain and the fraction of absent probes. This graphical summary spans a metrics system for expression estimates in natural units such as the mean binding constants and the occupancy of the probe spots. The method is single-chip based, i.e. it separately uses the intensities of each selected chip.
    Conclusion: The hook method corrects the raw intensities for the non-specific background hybridization in a sequence-specific manner, for the potential saturation of the probe spots with bound transcripts and for the sequence-specific binding of specific transcripts. The obtained chip characteristics, in combination with the sensitivity-corrected probe-intensity values, provide expression estimates scaled in natural units which are given by the binding constants of the particular hybridization.
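
    The coordinates underlying the hook plot can be stated compactly. The relations below are a minimal sketch implied by the abstract; the notation and parameterization are assumptions, not copied from the paper:

        \Delta \;=\; \log I_{\mathrm{PM}} - \log I_{\mathrm{MM}},
        \qquad
        \Sigma \;=\; \tfrac{1}{2}\left(\log I_{\mathrm{PM}} + \log I_{\mathrm{MM}}\right)

        I_P \;=\; M\,\frac{X_P}{1+X_P} \;+\; O,
        \qquad
        X_P \;=\; X_P^{\mathrm{N}} + X_P^{\mathrm{S}},
        \qquad P \in \{\mathrm{PM}, \mathrm{MM}\}

    Here $X_P^{\mathrm{N}}$ and $X_P^{\mathrm{S}}$ denote non-specific and specific binding strengths, $M$ the saturation intensity and $O$ an optical offset. As the specific contribution grows from zero (absent probes) towards saturation, the point $(\Sigma, \Delta)$ traces the hook-shaped curve whose start, height and end points encode the mean background, the PM/MM gain and the saturation value described above.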

    The impacts of climate change on river flood risk at the global scale

    This paper presents an assessment of the implications of climate change for global river flood risk. It is based on the estimation of flood frequency relationships at a grid resolution of 0.5 × 0.5°, using a global hydrological model with climate scenarios derived from 21 climate models, together with projections of future population. Four indicators of the flood hazard are calculated: change in the magnitude and return period of flood peaks, flood-prone population and cropland exposed to substantial change in flood frequency, and a generalised measure of regional flood risk based on combining frequency curves with generic flood damage functions. Under one climate model, emissions and socioeconomic scenario (HadCM3 and SRES A1b), in 2050 the current 100-year flood would occur at least twice as frequently across 40 % of the globe, approximately 450 million flood-prone people and 430 thousand km² of flood-prone cropland would be exposed to a doubling of flood frequency, and global flood risk would increase by approximately 187 % over the risk in 2050 in the absence of climate change. There is strong regional variability (most adverse impacts would be in Asia), and considerable variability between climate models. In 2050, the range in increased exposure across 21 climate models under SRES A1b is 31–450 million people and 59–430 thousand km² of cropland, and the change in risk varies between −9 and +376 %. The paper presents impacts by region, and also presents relationships between change in global mean surface temperature and impacts on the global flood hazard. There are a number of caveats with the analysis: it is based on one global hydrological model only, the climate scenarios are constructed using pattern-scaling, and the precise impacts are sensitive to some of the assumptions in the definition and application of the indicators.
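
    The generalised risk measure can be illustrated with a small sketch: damage integrated over the flood frequency (exceedance probability) curve gives an expected annual damage. The frequency curve, damage function and all numbers below are illustrative assumptions, not values from the study.

        # Toy grid cell: flood frequency curve and a generic damage function.
        import numpy as np

        return_periods = np.array([2.0, 5, 10, 25, 50, 100, 250])    # years
        peak_flow = np.array([100.0, 150, 190, 240, 280, 320, 370])  # m^3/s

        def damage(flow, threshold=180.0, scale=1e6):
            """Generic damage function: zero below bankfull, rising linearly above."""
            return np.maximum(flow - threshold, 0.0) * scale

        # Expected annual damage = integral of damage over exceedance probability.
        p = 1.0 / return_periods                 # annual exceedance probability
        idx = np.argsort(p)                      # integrate with p ascending
        d = damage(peak_flow)[idx]
        ead = float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(p[idx])))
        print(f"expected annual damage: {ead:,.0f} (currency units/yr)")

    Comparing this integral for a baseline and a climate-perturbed frequency curve gives a percentage change in risk of the kind quoted above.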

    Position dependent mismatch discrimination on DNA microarrays – experiments and model

    Background: The propensity of oligonucleotide strands to form stable duplexes with complementary sequences is fundamental to biological and biotechnological processes as diverse as microRNA signalling, microarray hybridization and PCR. Yet our understanding of oligonucleotide hybridization, in particular in the presence of surfaces, is rather limited. Here we use oligonucleotide microarrays made in-house by optically controlled DNA synthesis to produce probe sets comprising all possible single base mismatches and base bulges for each of 20 sequence motifs under study.
    Results: We observe that mismatch discrimination is mostly determined by the defect position (relative to the duplex ends) as well as by the sequence context. We investigate the thermodynamics of the oligonucleotide duplexes on the basis of a double-ended molecular zipper model. Theoretical predictions of the positional influence of defects as well as of long-range sequence influence agree well with the experimental results.
    Conclusion: Molecular zipping at thermodynamic equilibrium explains the binding affinity of mismatched DNA duplexes on microarrays well. The position-dependent nearest neighbor model (PDNN) can be inferred from it. Quantitative understanding of microarray experiments from first principles is in reach.
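
    The double-ended zipper picture can be made concrete with a small sketch: the duplex may fray from both ends, so only contiguous bound stretches contribute to the partition function, and a defect near a duplex end can be frayed out cheaply. The statistical weights below are illustrative, not fitted values from the paper.

        # Double-ended molecular zipper: the bound region is a contiguous
        # stretch [i, j); sum Boltzmann weights over all such stretches.
        import numpy as np

        def zipper_partition_function(n, defect_pos=None, w_match=10.0, w_defect=0.1):
            """Partition function of an n-base-pair duplex with an optional defect."""
            w = np.full(n, w_match)
            if defect_pos is not None:
                w[defect_pos] = w_defect         # mismatch weakens one base pair
            z = 1.0                               # fully frayed (unbound) state
            for i in range(n):
                for j in range(i + 1, n + 1):
                    z += np.prod(w[i:j])
            return z

        n = 20
        perfect = zipper_partition_function(n)
        for pos in (0, 5, 10):                    # defect at end, quarter, centre
            ratio = zipper_partition_function(n, pos) / perfect
            print(f"defect at {pos:2d}: relative affinity {ratio:.2e}")

    Running this shows a central defect suppressing the binding affinity far more than the same defect near an end, which is the position dependence reported above.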

    Study on Composition Distribution and Ferromagnetism of Monodisperse FePt Nanoparticles

    Monodisperse FePt nanoparticles with sizes of 4.5 and 6.0 nm were prepared by simultaneous reduction of platinum acetylacetonate and thermal decomposition of iron pentacarbonyl in benzyl ether. The crystal structure, size, and composition of the FePt nanoparticles were examined by X-ray diffraction and transmission electron microscopy. Energy-dispersive X-ray spectrometry measurements of individual particles indicate a broad compositional distribution in both the 4.5 and 6.0 nm FePt nanoparticles. The effects of the compositional distribution on the phase transition and magnetic properties of the FePt nanoparticles were investigated.

    Incidence of post myocardial infarction left ventricular thrombus formation in the era of primary percutaneous intervention and glycoprotein IIb/IIIa inhibitors. A prospective observational study

    BACKGROUND: Before the widespread use of primary percutaneous coronary intervention (PCI) and glycoprotein IIb/IIIa inhibitors (GP IIb/IIIa), left ventricular (LV) thrombus formation had been reported to complicate up to 20% of acute myocardial infarctions (AMI). The incidence of LV thrombus formation with these treatment modalities is not well known. METHODS: 92 consecutive patients with ST-elevation AMI treated with PCI and GP IIb/IIIa inhibitors underwent 2-D echocardiograms, with and without echo contrast agent, within 24–72 hours. RESULTS: Only 4/92 (4.3%) had an LV thrombus, representing a significantly lower incidence than that reported in the pre-PCI era. Use of contrast agents did not improve detection of LV thrombi in our study. CONCLUSION: The incidence of LV thrombus formation after acute MI, in the current era of rapid reperfusion, is lower than what has been historically reported.
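
    As a quick plausibility check on the reported incidence, the sketch below computes a 95% confidence interval for 4/92; the Wilson score interval is our choice for illustration, as the paper reports only the point estimate.

        # Reported incidence 4/92 with a 95% Wilson score interval.
        from math import sqrt

        def wilson_ci(k, n, z=1.96):
            """95% Wilson score interval for a binomial proportion k/n."""
            p = k / n
            denom = 1 + z**2 / n
            centre = (p + z**2 / (2 * n)) / denom
            half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
            return centre - half, centre + half

        lo, hi = wilson_ci(4, 92)
        print(f"incidence 4/92 = {4/92:.1%}, 95% CI {lo:.1%} to {hi:.1%}")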

    Application of Equilibrium Models of Solution Hybridization to Microarray Design and Analysis

    Background: The probe percent bound value, calculated using multi-state equilibrium models of solution hybridization, is shown to be useful in understanding the hybridization behavior of 50-nucleotide microarray probes, with and without mismatches. These longer oligonucleotides are in widespread use on microarrays, but there are few controlled studies of their interactions with mismatched targets compared to 25-mer based platforms.
    Principal Findings: 50-mer oligonucleotides with centrally placed single, double and triple mismatches were spotted on an array. Over a range of target concentrations it was possible to discriminate binding to perfect matches and mismatches, and the type of mismatch could be predicted accurately in the concentration midrange (100 pM to 200 pM) using solution hybridization modeling methods. These results have implications for microarray design, optimization and analysis methods.
    Conclusions: Our results highlight the importance of incorporating biophysical factors in both the design and the analysis of microarrays. Use of the probe "percent bound" value predicted by equilibrium models of hybridization is confirmed to be important for predicting and interpreting the behavior of long oligonucleotide arrays, as has been shown for shorter (25-mer) arrays.
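
    The percent bound quantity has a simple closed form in the two-state limit. The sketch below uses that reduction with illustrative duplex free energies; the paper's multi-state models and actual parameter values are not reproduced here.

        # Fraction of probe bound at equilibrium, two-state reduction.
        # Free energies (kcal/mol) and temperature are illustrative assumptions.
        import math

        R = 1.987e-3            # gas constant, kcal/(mol*K)
        T = 318.15              # assumed 45 C hybridization temperature

        def percent_bound(dg, target_conc):
            """Two-state duplex: K = exp(-dG/RT), theta = K*c / (1 + K*c)."""
            k = math.exp(-dg / (R * T))
            return 100.0 * k * target_conc / (1.0 + k * target_conc)

        for label, dg in [("perfect match", -15.0), ("single mismatch", -12.5),
                          ("triple mismatch", -8.0)]:
            pb = percent_bound(dg, 150e-12)   # 150 pM, midrange as in the abstract
            print(f"{label:16s} {pb:6.2f}% bound")

    Even this crude reduction reproduces the qualitative point: in the concentration midrange, perfect matches and the different mismatch classes occupy clearly separated percent-bound regimes.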

    Hamiltonicity below Dirac's condition

    Dirac's theorem (1952) is a classical result of graph theory, stating that an $n$-vertex graph ($n \geq 3$) is Hamiltonian if every vertex has degree at least $n/2$. Both the value $n/2$ and the requirement for every vertex to have high degree are necessary for the theorem to hold. In this work we give efficient algorithms for determining Hamiltonicity when either of the two conditions is relaxed. More precisely, we show that the Hamiltonian cycle problem can be solved in time $c^k \cdot n^{O(1)}$, for some fixed constant $c$, if at least $n-k$ vertices have degree at least $n/2$, or if all vertices have degree at least $n/2 - k$. The running time is, in both cases, asymptotically optimal under the exponential-time hypothesis (ETH). The results extend the range of tractability of the Hamiltonian cycle problem, showing that it is fixed-parameter tractable when parameterized below a natural bound. In addition, for the first parameterization we show that a kernel with $O(k)$ vertices can be found in polynomial time.
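
    The two parameterizations can be made concrete: $k$ counts either the vertices violating Dirac's degree bound or the uniform degree deficit. The sketch below merely computes these parameters for a given graph; the paper's $c^k \cdot n^{O(1)}$ Hamiltonicity algorithms themselves are substantially more involved.

        def dirac_parameters(n, edges):
            """Return (k1, k2): k1 = number of vertices with degree < n/2 (first
            parameterization); k2 = smallest k with min degree >= n/2 - k (second)."""
            deg = [0] * n
            for u, v in edges:
                deg[u] += 1
                deg[v] += 1
            k1 = sum(1 for d in deg if 2 * d < n)        # vertices violating Dirac
            k2 = max(0, (n - 2 * min(deg) + 1) // 2)     # ceil(n/2 - min degree)
            return k1, k2

        # A 6-cycle: every vertex has degree 2 < 6/2 = 3, yet it is Hamiltonian,
        # illustrating that Dirac's condition is sufficient but not necessary.
        cycle6 = [(i, (i + 1) % 6) for i in range(6)]
        print(dirac_parameters(6, cycle6))               # -> (6, 1)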

    The sensitivity of the tropical circulation and Maritime Continent precipitation to climate model resolution

    The dependence of the annual mean tropical precipitation on horizontal resolution is investigated in the atmospheric version of the Hadley Centre General Environment Model (HadGEM1). Reducing the grid spacing from about 350 km to 110 km improves the precipitation distribution in most of the tropics. In particular, characteristic dry biases over South and Southeast Asia, including the Maritime Continent, as well as wet biases over the western tropical oceans are reduced. The annual-mean precipitation bias is reduced by about one third over the Maritime Continent and the neighbouring ocean basins linked to it via the Walker circulation. Sensitivity experiments show that much of the improvement with resolution in the Maritime Continent region is due to the specification of better-resolved surface boundary conditions (land fraction, soil and vegetation parameters) at the higher resolution. It is shown that, in particular, the formulation of the coastal tiling scheme may cause resolution sensitivity of the mean simulated climate. The improvement in the tropical mean precipitation in this region is not primarily associated with the better representation of orography at the higher resolution, nor with changes in the eddy transport of moisture. Sizeable sensitivity to changes in the surface fields may be one of the reasons for the large variation of the mean tropical precipitation distribution seen across climate models.
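
    The resolution sensitivity of the surface boundary conditions can be illustrated with a toy example: area-averaging a high-resolution land mask onto a coarser grid changes which cells carry fractional land and must be handled by a coastal tiling scheme. This is an illustrative regridding only, not the HadGEM1 formulation.

        # Coarsening a toy land mask: coastal cells acquire intermediate land
        # fractions as the grid gets coarser. All fields here are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        fine = (rng.random((12, 12)) < 0.4).astype(float)   # 1 = land, 0 = ocean

        def coarsen(field, factor):
            """Area-weighted average over factor x factor blocks of grid cells."""
            n = field.shape[0] // factor
            return field[:n*factor, :n*factor].reshape(n, factor, n, factor).mean(axis=(1, 3))

        for f in (1, 2, 4):                  # rough analogue of 110 km -> 350 km
            frac = coarsen(fine, f)
            coastal = np.mean((frac > 0) & (frac < 1))
            print(f"block {f}x{f}: fraction of mixed land/ocean cells = {coastal:.2f}")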